autonomy level
Autonomy Matters: A Study on Personalization-Privacy Dilemma in LLM Agents
Zhang, Zhiping, Zhang, Yi Evie, Shi, Freda, Li, Tianshi
Large Language Model (LLM) agents require personal information for personalization in order to better act on users' behalf in daily tasks, but this raises privacy concerns and a personalization-privacy dilemma. An agent's autonomy introduces both risks and opportunities, yet its effects remain unclear. To better understand this, we conducted a 3$\times$3 between-subjects experiment ($N=450$) to study how an agent's autonomy level and personalization influence users' privacy concerns, trust, and willingness to use, as well as the underlying psychological processes. We find that personalization without considering users' privacy preferences increases privacy concerns and decreases trust and willingness to use. Autonomy moderates these effects: intermediate autonomy flattens the impact of personalization compared to the no- and full-autonomy conditions. Our results suggest that rather than aiming for perfect model alignment in output generation, balancing the autonomy of agents' actions with user control offers a promising path to mitigating the personalization-privacy dilemma.
Levels of Autonomy for AI Agents
Feng, K. J. Kevin, McDonald, David W., Zhang, Amy X.
Autonomy is a double-edged sword for AI agents, simultaneously unlocking transformative possibilities and serious risks. How can agent developers calibrate the appropriate levels of autonomy at which their agents should operate? We argue that an agent's level of autonomy can be treated as a deliberate design decision, separate from its capability and operational environment. In this work, we define five levels of escalating agent autonomy, characterized by the roles a user can take when interacting with an agent: operator, collaborator, consultant, approver, and observer. Within each level, we describe the ways by which a user can exert control over the agent and open questions for how to design the nature of user-agent interaction. We then highlight a potential application of our framework towards AI autonomy certificates to govern agent behavior in single- and multi-agent systems. We conclude by proposing early ideas for evaluating agents' autonomy. Our work aims to contribute meaningful, practical steps towards responsibly deployed and useful AI agents in the real world.
Artificially Intelligent Disobedience: Rethinking the Agency of Our Artificial Teammates
The field of artificial intelligence is currently abuzz with discussions surrounding "agentic AI" or "AI agents." However, despite the widespread excitement, the term agent itself often lacks a precise, universally agreed-upon definition within these conversations. Recently, significant focus has shifted towards agents built upon large language models (LLMs), leveraging some reasoning and language understanding capabilities to execute complex tasks, interact with external tools, and learn from feedback [53, 56, 63, 66, 67]. This move towards more autonomous, goal-directed LLM systems represents a promising yet challenging frontier in AI development. During this time, AI algorithms have also reached superhuman performance in numerous tasks such as game playing [9,57,62,65] and text and image processing [2, 15, 51]. On the other hand, there are still significant obstacles that modern AI has yet to overcome. Grosz [21] proposed a revised Turing Test to create: "A computer team member that can behave, over the long term and in uncertain, dynamic environments, in such a way that people on the team will not notice that it is not human."
Preserving Sense of Agency: User Preferences for Robot Autonomy and User Control across Household Tasks
Yang, Claire, Patel, Heer, Kleiman-Weiner, Max, Cakmak, Maya
Roboticists often design with the assumption that assistive robots should be fully autonomous. However, it remains unclear whether users prefer highly autonomous robots, as prior work in assistive robotics suggests otherwise. High robot autonomy can reduce the user's sense of agency, which represents feeling in control of one's environment. How much control do users, in fact, want over the actions of robots used for in-home assistance? We investigate how robot autonomy levels affect users' sense of agency and the autonomy level they prefer in contexts with varying risks. Our study asked participants to rate their sense of agency as robot users across four distinct autonomy levels and to rank their robot preferences with respect to various household tasks. Our findings revealed that participants' sense of agency was primarily influenced by two factors: (1) whether the robot acts autonomously, and (2) whether a third party is involved in the robot's programming or operation. Notably, an end-user-programmed robot highly preserved users' sense of agency, even though it acts autonomously. However, in high-risk settings, e.g., preparing a snack for a child with allergies, participants significantly preferred robots that prioritized their control. Additional contextual factors, such as trust in a third-party operator, also shaped their preferences.
Using Fitts' Law to Benchmark Assisted Human-Robot Performance
Pan, Jiahe, Eden, Jonathan, Oetomo, Denny, Johal, Wafa
Shared control systems aim to combine human and robot abilities to improve task performance. However, achieving optimal performance requires that the robot's level of assistance adjusts to the operator's cognitive workload in response to the task difficulty. Understanding and dynamically adjusting this balance is crucial to maximizing efficiency and user satisfaction. In this paper, we propose a novel benchmarking method for shared control systems based on Fitts' Law to formally parameterize the difficulty level of a target-reaching task. With this, we systematically quantify and model the effect of task difficulty (i.e. size and distance of target) and robot autonomy on task performance and operators' cognitive load and trust levels. Our empirical results (N=24) not only show that both task difficulty and robot autonomy influence task performance, but also that the performance can be modelled using these parameters, which may allow for the generalization of this relationship across more diverse setups. We also found that the users' perceived cognitive load and trust were influenced by these factors. Given the challenges in directly measuring cognitive load in real-time, our adapted Fitts' model presents a potential alternative approach to estimate cognitive load through determining the difficulty level of the task, with the assumption that greater task difficulty results in higher cognitive load levels. We hope that these insights and our proposed framework inspire future works to further investigate the generalizability of the method, ultimately enabling the benchmarking and systematic assessment of shared control quality and user impact, which will aid in the development of more effective and adaptable systems.
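As a rough illustration of the parameterization this abstract describes, Fitts' Law expresses the difficulty of a target-reaching task from target distance and width. The sketch below uses the Shannon formulation of the index of difficulty, which is standard in the HCI literature but not confirmed as the exact variant used in the paper; the constants `a` and `b` are illustrative placeholders for empirically fitted values.

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Fitts' index of difficulty (Shannon formulation), in bits.

    distance: distance from start point to target center.
    width: target width along the axis of motion (same units as distance).
    """
    return math.log2(distance / width + 1)

def predicted_movement_time(distance: float, width: float,
                            a: float = 0.1, b: float = 0.15) -> float:
    """Fitts' law: MT = a + b * ID.

    a, b are empirically fitted per user and device; the defaults
    here are illustrative placeholders, not values from the paper.
    """
    return a + b * index_of_difficulty(distance, width)

# A target 30 units away and 2 units wide gives ID = log2(16) = 4 bits.
print(index_of_difficulty(30, 2))
```

Under this model, a benchmarking study can sweep `distance` and `width` to place trials at controlled difficulty levels, then regress observed completion times against the resulting ID values.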
A Mini-Review on Mobile Manipulators with Variable Autonomy
Contreras, Cesar Alan, Rastegarpanah, Alireza, Stolkin, Rustam, Chiou, Manolis
This paper presents a mini-review of the current state of research in mobile manipulators with variable levels of autonomy, emphasizing their associated challenges and application environments. The need for mobile manipulators in different environments is evident due to the unique challenges and risks each presents. Many systems deployed in these environments are not fully autonomous, requiring human-robot teaming to ensure safe and reliable operations under uncertainties. Through this analysis, we identify gaps and challenges in the literature on Variable Autonomy, including cognitive workload and communication delays, and propose future directions, including whole-body Variable Autonomy for mobile manipulators, virtual reality frameworks, and large language models to reduce task complexity and operators' cognitive load in challenging and uncertain scenarios.
Methods for Combining and Representing Non-Contextual Autonomy Scores for Unmanned Aerial Systems
Hertel, Brendan, Donald, Ryan, Dumas, Christian, Ahmadzadeh, S. Reza
Measuring an overall autonomy score for a robotic system requires the combination of a set of relevant aspects and features of the system that might be measured in different units, qualitative, and/or discordant. In this paper, we build upon an existing non-contextual autonomy framework that measures and combines the Autonomy Level and the Component Performance of a system into an overall autonomy score. We examine several methods of combining features, showing how some methods find different rankings of the same data, and we employ the weighted product method to resolve this issue. Furthermore, we introduce the non-contextual autonomy coordinate and represent the overall autonomy of a system with an autonomy distance. We apply our method to a set of seven Unmanned Aerial Systems (UAS) and obtain their absolute autonomy scores as well as their relative scores with respect to the best system.
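The weighted product method mentioned above combines normalized feature scores by multiplying each score raised to its weight, which makes the resulting ranking insensitive to per-feature rescaling (one cause of the rank disagreements the abstract notes). The sketch below is a generic illustration of the method under these assumptions; the feature values and weights are hypothetical, not taken from the paper.

```python
import math

def weighted_product(scores: list[float], weights: list[float]) -> float:
    """Combine positive, normalized feature scores into one value
    via the weighted product method: prod(s_i ** w_i).

    Weights are conventionally non-negative and sum to 1.
    """
    if len(scores) != len(weights):
        raise ValueError("scores and weights must have the same length")
    return math.prod(s ** w for s, w in zip(scores, weights))

# Hypothetical example: two systems rated on two equally weighted features.
weights = [0.5, 0.5]
system_a = weighted_product([0.8, 0.5], weights)
system_b = weighted_product([0.6, 0.7], weights)
print(system_a, system_b)
```

Because each feature enters multiplicatively, multiplying every system's value for one feature by the same constant scales all combined scores by the same factor, so the ranking of systems is preserved regardless of the units chosen for that feature.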
Exploring the Effects of Shared Autonomy on Cognitive Load and Trust in Human-Robot Interaction
Pan, Jiahe, Eden, Jonathan, Oetomo, Denny, Johal, Wafa
Teleoperation is increasingly recognized as a viable solution for deploying robots in hazardous environments. Controlling a robot to perform a complex or demanding task may overload operators, resulting in poor performance. To design a robot controller to assist the human in executing such challenging tasks, a comprehensive understanding of the interplay between the robot's autonomous behavior and the operator's internal state is essential. In this paper, we investigate the relationships between robot autonomy and both the human user's cognitive load and trust levels, and the potential existence of three-way interactions in the robot-assisted execution of the task. Our user study (N=24) results indicate that while autonomy level influences the teleoperator's perceived cognitive load and trust, there is no clear interaction between these factors. Instead, these elements appear to operate independently, thus highlighting the need to consider both cognitive load and trust as distinct but interrelated factors in varying the robot autonomy level in shared-control settings. This insight is crucial for the development of more effective and adaptable assistive robotic systems.
Trust-Preserved Human-Robot Shared Autonomy enabled by Bayesian Relational Event Modeling
Shared autonomy functions as a flexible framework that empowers robots to operate across a spectrum of autonomy levels, allowing for efficient task execution with minimal human oversight. However, humans might be intimidated by the autonomous decision-making capabilities of robots due to perceived risks and a lack of trust. This paper proposes a trust-preserved shared autonomy strategy that allows robots to seamlessly adjust their autonomy level, striving to optimize team performance and enhance their acceptance among human collaborators. By enhancing the Relational Event Modeling framework with Bayesian learning techniques, this paper enables dynamic inference of human trust based solely on time-stamped relational events within human-robot teams. Adopting a longitudinal perspective on trust development and calibration in human-robot teams, the proposed shared autonomy strategy enables robots to preserve human trust by not only passively adapting to it but also actively participating in trust repair when violations occur. We validate the effectiveness of the proposed approach through a user study on human-robot collaborative search and rescue scenarios. The objective and subjective evaluations demonstrate its merits over teleoperation in both task execution and user acceptability.
"I am the follower, also the boss": Exploring Different Levels of Autonomy and Machine Forms of Guiding Robots for the Visually Impaired
Zhang, Yan, Li, Ziang, Guo, Haole, Wang, Luyao, Chen, Qihe, Jiang, Wenjie, Fan, Mingming, Zhou, Guyue, Gong, Jiangtao
Guiding robots, in the form of canes or cars, have recently been explored to assist blind and low-vision (BLV) people. Such robots can provide full or partial autonomy when guiding. However, the pros and cons of different forms and autonomy levels for guiding robots remain unknown. We sought to fill this gap. We designed an autonomy-switchable guiding robotic cane and car. We conducted a controlled lab study (N=12) and a field study (N=9) with BLV participants. Results showed that full autonomy yielded better walking performance and subjective ratings in the controlled study, whereas participants used partial autonomy more in the natural environment, as they demanded more control. Besides, the car robot demonstrated the ability to provide a higher sense of safety and navigation efficiency compared with the cane robot. Our findings offer empirical evidence about how the BLV community perceives different machine forms and autonomy levels, which can inform the design of assistive robots.